
    Entanglement-guided architectures of machine learning by quantum tensor network

    It is a fundamental but still elusive question whether schemes based on quantum mechanics, in particular on quantum entanglement, can be used for classical information processing and machine learning. Even a partial answer to this question would bring important insights to both machine learning and quantum mechanics. In this work, we implement simple numerical experiments on pattern/image classification, in which we represent the classifiers by many-qubit quantum states written as matrix product states (MPS). A classical machine learning algorithm is applied to these quantum states to learn the classical data. We explicitly show how quantum entanglement (i.e., single-site and bipartite entanglement) can emerge in images represented this way. Entanglement here characterizes the importance of the data, and this information is used in practice to guide the architecture of the MPS and improve efficiency. The number of qubits needed can be reduced to less than 1/10 of the original number, which is within reach of state-of-the-art quantum computers. We expect such numerical experiments could open new paths in characterizing classical machine learning algorithms, and at the same time shed light on generic quantum simulations/computations of machine learning tasks.
    Comment: 10 pages, 5 figures
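    The abstract's key quantity, single-site entanglement used to rank the importance of qubits, can be illustrated with a minimal sketch. This is not the paper's MPS implementation: it works on a full state vector as a toy stand-in, and the pruning interpretation in the comments is an assumption drawn from the abstract.

    ```python
    import numpy as np

    def single_site_entanglement(state, site, n_qubits):
        """Von Neumann entropy of one qubit's reduced density matrix.

        `state` is a full 2**n_qubits state vector (a toy stand-in for
        an MPS; the paper works with matrix product states directly).
        """
        psi = state.reshape([2] * n_qubits)
        # Move the chosen site's axis to the front, flatten the rest.
        psi = np.moveaxis(psi, site, 0).reshape(2, -1)
        # Schmidt coefficients across the (site | rest) bipartition.
        s = np.linalg.svd(psi, compute_uv=False)
        p = s**2
        p = p[p > 1e-12]
        return float(-np.sum(p * np.log2(p)))

    # A product state has zero entanglement on every site; in an
    # entanglement-guided scheme such sites would be pruned first.
    n = 3
    product = np.zeros(2**n); product[0] = 1.0   # |000>
    entropies = [single_site_entanglement(product, i, n) for i in range(n)]
    print(entropies)  # all ~0

    # A Bell pair on qubits 0 and 1 carries one bit of entanglement there,
    # while qubit 2 stays unentangled.
    bell = np.zeros(2**n)
    bell[0b000] = bell[0b110] = 1 / np.sqrt(2)   # (|00> + |11>) |0> / sqrt(2)
    print([round(single_site_entanglement(bell, i, n), 3) for i in range(n)])
    ```

    Ranking sites by this entropy and discarding the lowest-entropy ones is the intuition behind reducing the qubit count below 1/10 of the original.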

    Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays

    Resistive RAM crossbar arrays offer an attractive solution to minimize off-chip data transfer and parallelize on-chip computations for neural networks. Here, we report a hardware/software co-design approach based on low-energy subquantum conductive bridging RAM (CBRAM®) devices and a network pruning technique to reduce network-level energy consumption. First, we demonstrate low-energy subquantum CBRAM devices exhibiting the gradual switching characteristics important for implementing weight updates in hardware during unsupervised learning. Then, we develop a network pruning algorithm that can be employed during training, in contrast to previous network pruning approaches applied only at inference. Using a 512 kbit subquantum CBRAM array, we experimentally demonstrate high recognition accuracy on the MNIST dataset for a digital implementation of unsupervised learning. Our hardware/software co-design approach can pave the way towards resistive-memory-based neuro-inspired systems that can autonomously learn and process information in power-limited settings.
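    The distinguishing idea, pruning during training rather than after it, can be sketched in a few lines. The update rule and pruning schedule below are hypothetical placeholders, not the paper's algorithm: the point is only that a mask fixed mid-training gates all subsequent weight updates, so pruned synapses stop consuming update energy.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dense layer; W stands in for the CBRAM conductance array.
    W = rng.normal(0, 0.1, size=(64, 16))
    mask = np.ones_like(W)

    def sparsity(m):
        return 1.0 - m.mean()

    for step in range(100):
        x = rng.normal(size=64)
        y = (W * mask).T @ x
        # Placeholder Hebbian-style update (a stand-in for the gradual
        # CBRAM conductance updates described in the abstract).
        W += 0.01 * np.outer(x, y) * mask
        if step == 50:  # prune mid-training, not after convergence
            thresh = np.quantile(np.abs(W), 0.5)
            mask = (np.abs(W) >= thresh).astype(W.dtype)
            W *= mask

    print(f"fraction pruned: {sparsity(mask):.2f}")
    ```

    After the pruning step, every masked weight stays exactly zero for the rest of training, which is what saves device programming energy in a hardware realization.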

    Measuring The User Experience And Its Importance To Customer Satisfaction: An Empirical Study For Telecom e-Service Websites

    In telecom settings, using e-service websites has become an increasingly common activity among mobile users. As an important channel, the quality of the website user experience plays a key role in e-service and business success. Using an online structured questionnaire, a total of 20,040 respondents across thirty-one provinces in China were surveyed. With Principal Component Analysis, a five-factor e-service website user experience questionnaire was examined, and the factors of perceived functional completion, perceived website performance, quality of interface and interaction, quality of content and information, and quality of online customer support or service were found to effectively measure e-service website user experience quality. In addition, all five of these aspects of e-service website user experience were found to be significant predictors of overall customer satisfaction.
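    The factor-extraction step behind such a five-factor questionnaire can be sketched with Principal Component Analysis on synthetic data. The respondent data below is fabricated for illustration only (the study's actual survey items and loadings are not reproduced here); the sketch shows how a five-factor structure is recovered from item correlations.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic stand-in for questionnaire responses: n respondents,
    # 15 items generated from 5 latent factors (3 items each), mirroring
    # the five-factor structure the study reports.
    n, n_factors, items_per = 1000, 5, 3
    latent = rng.normal(size=(n, n_factors))
    loadings = np.kron(np.eye(n_factors), np.ones((1, items_per)))
    X = latent @ loadings + 0.3 * rng.normal(size=(n, n_factors * items_per))

    # PCA via eigendecomposition of the item correlation matrix.
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    # Kaiser criterion: retain components with eigenvalue > 1.
    retained = int(np.sum(eigvals > 1.0))
    print("retained factors:", retained)
    ```

    With three highly correlated items per latent factor, exactly five eigenvalues exceed 1, matching the five-factor solution the study arrives at.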

    One-Bit Byzantine-Tolerant Distributed Learning via Over-the-Air Computation

    Distributed learning has become a promising computational parallelism paradigm that enables a wide scope of intelligent applications, from the Internet of Things (IoT) to autonomous driving and the healthcare industry. This paper studies distributed learning in wireless data center networks, which contain a central edge server and multiple edge workers that collaboratively train a shared global model and benefit from parallel computing. However, the distributed nature makes the learning process vulnerable to faults and adversarial attacks from Byzantine edge workers, and induces severe communication and computation overhead through the periodic information exchange process. To achieve fast and reliable model aggregation in the presence of Byzantine attacks, we develop a signed stochastic gradient descent (SignSGD)-based Hierarchical Vote framework via over-the-air computation (AirComp), where one voting process is performed locally at the wireless edge by taking advantage of Bernoulli coding, while the other is operated over-the-air at the central edge server by utilizing the waveform superposition property of multiple-access channels. We comprehensively analyze the impacts of Byzantine attacks and the wireless environment (channel fading and receiver noise) on the proposed framework, and characterize its convergence behavior under non-convex settings. Simulation results validate our theoretical achievements and demonstrate the robustness of our proposed framework in the presence of Byzantine attacks and receiver noise.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
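    The core robustness mechanism, SignSGD with majority voting, can be sketched in isolation. This is a simplified, noise-free stand-in: it omits the paper's Bernoulli coding, channel fading, and over-the-air superposition, and only shows why an element-wise sign vote survives a Byzantine minority.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sign_sgd_round(grads, byzantine_idx):
        """One aggregation round: each worker sends only sign bits, and the
        server takes an element-wise majority vote (a simplified stand-in
        for the paper's over-the-air hierarchical vote)."""
        votes = np.sign(grads)
        votes[byzantine_idx] *= -1          # attackers flip their sign bits
        return np.sign(votes.sum(axis=0))   # majority vote per coordinate

    # 10 workers, 3 of them Byzantine, sharing a common true gradient g
    # (entries bounded away from zero so honest signs agree).
    d, workers = 8, 10
    g = (0.5 + rng.random(d)) * rng.choice([-1.0, 1.0], size=d)
    local = g + 0.1 * rng.normal(size=(workers, d))  # noisy local gradients
    agg = sign_sgd_round(local, byzantine_idx=[0, 1, 2])

    # With an honest majority, the vote recovers sign(g) on every coordinate.
    print(np.array_equal(agg, np.sign(g)))
    ```

    Because only sign bits are exchanged, each coordinate costs one bit per worker, which is what makes the over-the-air aggregation in the paper communication-efficient.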

    Correct order on some certain weighted representation functions

    Let $\mathbb{N}$ be the set of all nonnegative integers. For any positive integer $k$ and any subset $A$ of nonnegative integers, let $r_{1,k}(A,n)$ be the number of solutions $(a_1,a_2)$ with $a_1,a_2\in A$ to the equation $n=a_1+ka_2$. In 2016, Qu proved that $\liminf_{n\rightarrow\infty}r_{1,k}(A,n)=\infty$, provided that $r_{1,k}(A,n)=r_{1,k}(\mathbb{N}\setminus A,n)$ for all sufficiently large integers $n$, which answered affirmatively a 2012 problem of Yang and Chen. In a very recent article, another Chen (the first named author) slightly improved Qu's result and obtained that $\liminf_{n\rightarrow\infty}\frac{r_{1,k}(A,n)}{\log n}>0$. In this note, we further improve the lower bound on $r_{1,k}(A,n)$ by showing that $\liminf_{n\rightarrow\infty}\frac{r_{1,k}(A,n)}{n}>0$. Our bound reflects the correct order of magnitude of the representation function $r_{1,k}(A,n)$ under the above restrictions, due to the trivial fact that $r_{1,k}(A,n)\le n/k$.
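    To make the definition concrete, $r_{1,k}(A,n)$ can be computed by brute force for a small example. The set $A$ below (the even numbers, truncated) is chosen purely for illustration and is not from the paper; the bound checked is the trivial one that there are at most $\lfloor n/k\rfloor + 1$ choices of $a_2$.

    ```python
    # r_{1,k}(A, n): the number of pairs (a1, a2) in A x A with n = a1 + k*a2.
    def r(A, n, k):
        return sum(1 for a2 in A if k * a2 <= n and (n - k * a2) in A)

    N = 200
    A = set(range(0, N + 1, 2))      # illustrative example: the even numbers
    k = 2
    for n in range(0, 50):
        # trivial bound: a2 can take at most floor(n/k) + 1 values
        assert r(A, n, k) <= n // k + 1

    # e.g. solutions of a1 + 2*a2 = 40 with a1, a2 even: a2 in {0, 2, ..., 20}
    print(r(A, 40, k))  # 11
    ```

    The linear growth of such counts in $n$ is exactly the order of magnitude the note's lower bound $\liminf r_{1,k}(A,n)/n>0$ matches from below.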

    FocalDreamer: Text-driven 3D Editing via Focal-fusion Assembly

    While text-driven 3D editing has made significant strides in leveraging score distillation sampling, emerging approaches still fall short in delivering separable, precise and consistent outcomes that are vital to content creation. In response, we introduce FocalDreamer, a framework that merges a base shape with editable parts according to text prompts for fine-grained editing within desired regions. Specifically, equipped with geometry union and dual-path rendering, FocalDreamer assembles independent 3D parts into a complete object, tailored for convenient instance reuse and part-wise control. We propose a geometric focal loss and style consistency regularization, which encourage focal fusion and a congruent overall appearance. Furthermore, FocalDreamer generates high-fidelity geometry and PBR textures that are compatible with widely used graphics engines. Extensive experiments have highlighted the superior editing capabilities of FocalDreamer in both quantitative and qualitative evaluations.
    Comment: Project website: https://focaldreamer.github.io